| Stable release | 2.0 / February 17, 2010 |
| --- | --- |
| Written in | Java |
| Operating system | Linux / Unix-like / Windows (unsupported) |
| Type | Web crawler |
| License | BSD |
| Website | http://web-harvest.sourceforge.net/ |
Web harvesting is commonly used to describe Web scraping from a multitude of sites.[1] It also refers to an implementation of a Web crawler that uses human expertise or machine guidance to direct the crawler to URLs which compose a specialized collection or set of knowledge. Web harvesting can be thought of as focused or directed Web crawling.
Web harvesting allows Web-based search and retrieval applications, commonly referred to as search engines, to index content that is pertinent to the audience for which the harvest is intended. Such content is thus virtually integrated and made searchable as a separate Web application. General-purpose search engines, such as Google and Yahoo!, index all possible links they encounter from the origin of their crawl. In contrast, search engines based on Web harvesting index only the URLs to which they are directed. This strategy produces a searchable application that is faster, because the index is smaller, and that returns higher-quality, more selective results, because the indexed URLs are pre-filtered for the topic or domain of interest. In effect, harvesting makes otherwise isolated islands of information searchable as if they were an integrated whole.
Another common purpose of Web harvesting is to supply content to vertical search engines.
Web harvesting begins with a list of URLs that defines a specialized collection or body of knowledge, supplied as input to a crawling program. The program downloads each URL in the list. Embedded hyperlinks that are encountered can be either followed or ignored, depending on human or machine guidance. A key difference between Web harvesting and general-purpose Web crawling is that in harvesting the crawl depth is defined in advance, so the crawl need not recursively follow URLs until all links have been exhausted. The downloaded content is then indexed by the search engine application and offered to information customers as a searchable Web application. Customers can then search that application and follow hyperlinks to the original URLs that meet their search criteria.
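The bounded, guided crawl described above can be illustrated with a short sketch. The class and method names, the regex-based link extraction, and the example seed URL are illustrative assumptions rather than any particular harvester's implementation; the sketch only demonstrates the idea of downloading a seed list, following links under a guidance filter, and stopping at a fixed depth, using the Java standard library (Java 16+).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;
import java.util.function.Predicate;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal focused-harvesting sketch: downloads a seed list of URLs,
 * follows embedded links only when they pass a guidance filter, and
 * stops at a fixed crawl depth. The harvested pages would then be
 * handed to a separate indexing and search application.
 */
public class FocusedHarvester {

    // Very simplistic href extractor; a real harvester would use an HTML parser.
    private static final Pattern LINK = Pattern.compile("href=[\"'](http[^\"']+)[\"']");

    private record Task(String url, int depth) { }

    private final HttpClient client = HttpClient.newHttpClient();
    private final Predicate<String> guidance;   // human- or machine-supplied URL filter
    private final int maxDepth;                 // crawl depth is fixed up front

    public FocusedHarvester(Predicate<String> guidance, int maxDepth) {
        this.guidance = guidance;
        this.maxDepth = maxDepth;
    }

    /** Returns a map of URL to page body for every page accepted by the filter. */
    public Map<String, String> harvest(Set<String> seedUrls) {
        Map<String, String> harvested = new HashMap<>();
        Set<String> visited = new HashSet<>();
        Queue<Task> queue = new ArrayDeque<>();
        for (String seed : seedUrls) queue.add(new Task(seed, 0));

        while (!queue.isEmpty()) {
            Task task = queue.poll();
            if (!visited.add(task.url()) || !guidance.test(task.url())) continue;
            try {
                HttpResponse<String> response = client.send(
                        HttpRequest.newBuilder(URI.create(task.url())).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                harvested.put(task.url(), response.body());
                if (task.depth() < maxDepth) {   // follow links only while under the depth limit
                    Matcher m = LINK.matcher(response.body());
                    while (m.find()) queue.add(new Task(m.group(1), task.depth() + 1));
                }
            } catch (Exception e) {
                // Unreachable or malformed pages are simply skipped in this sketch.
            }
        }
        return harvested;
    }

    public static void main(String[] args) {
        // Hypothetical guidance: stay within one domain of interest, two levels deep.
        FocusedHarvester harvester = new FocusedHarvester(
                url -> url.startsWith("https://example.org/"), 2);
        Map<String, String> pages = harvester.harvest(Set.of("https://example.org/"));
        System.out.println("Harvested " + pages.size() + " pages for indexing");
    }
}
```

A production harvester would also parse HTML properly, respect robots.txt, and pass the harvested pages to an indexer; those concerns are omitted here for brevity.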
Focused web harvesting is similar to targeted web crawling. Instead of letting a general-purpose crawler harvest the Web, the mechanism operates under pre-defined conditions that specify which information to collect.[2][3] In particular, this mechanism is intended to realize an indirect form of data integration. An implementation of this kind of data integration can be found in the Indonesian Scientific Index (ISI), which integrates information related to science and technology in Indonesia.[4]
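As a rough illustration of such pre-defined conditions, the predicate below restricts a crawl to a whitelist of domains and URL keywords and could be supplied as the guidance filter in the sketch above. The domain and keyword matching here is a hypothetical example, not a description of how ISI actually selects its sources.

```java
import java.util.Set;
import java.util.function.Predicate;

/** Hypothetical pre-defined harvesting conditions, expressed as a URL predicate. */
public class HarvestConditions {
    public static Predicate<String> forDomainsAndKeywords(Set<String> domains, Set<String> keywords) {
        Predicate<String> inDomain = url -> domains.stream().anyMatch(url::contains);
        Predicate<String> matchesKeyword = url -> keywords.stream()
                .anyMatch(k -> url.toLowerCase().contains(k.toLowerCase()));
        return inDomain.and(matchesKeyword);
    }
}
```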